
    Understanding and Exploiting Optimal Function Inlining


    Kinect Enabled Monte Carlo Localisation for a Robotic Wheelchair

    Proximity sensors and 2D vision methods have been shown to work robustly in particle filter-based Monte Carlo Localisation (MCL). It is worth examining, however, whether modern 3D vision sensors are equally effective for localising a robotic wheelchair with MCL. In this work, we introduce a visual Region Locator Descriptor, acquired from a 3D map using the Kinect sensor, to conduct localisation. The descriptor segments the Kinect's depth map into a grid of 36 regions, where the depth of each column-cell is used as a distance range for the measurement model of a particle filter. The experimental work concentrated on a comparison of three different localisation cases: (a) an odometry model without MCL, (b) MCL with sonar sensors only, and (c) MCL with the Kinect sensor only. The comparative study demonstrated the efficiency of a modern 3D depth sensor, such as the Kinect, which can be used reliably for wheelchair localisation.
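
    As a rough illustration of the measurement model described above, the sketch below segments a depth map into the 6x6 grid of 36 regions and weights a particle by a Gaussian likelihood over the per-cell depths. The grid size follows the abstract; the noise model, parameter names, and the expected_depths input are illustrative assumptions, not the authors' implementation.

        import numpy as np

        GRID = 6  # 6 x 6 = 36 regions, as in the abstract

        def region_descriptor(depth_map):
            """Mean depth of each cell in a GRID x GRID segmentation
            of the depth map (the 36-region descriptor)."""
            h, w = depth_map.shape
            cells = depth_map[: h - h % GRID, : w - w % GRID]
            cells = cells.reshape(GRID, h // GRID, GRID, w // GRID)
            return cells.mean(axis=(1, 3))

        def particle_weight(observed_depths, expected_depths, sigma=0.15):
            """Weight one particle: Gaussian likelihood of the observed
            cell depths given the depths expected at the particle's pose
            (an assumed stand-in for the paper's distance-range model)."""
            err = observed_depths - expected_depths
            return float(np.exp(-0.5 * np.sum((err / sigma) ** 2)))

    In an MCL loop, this weight would replace the sonar-based likelihood when resampling particles.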

    A recursive Bayesian filter for landmark-based localisation of a wheelchair robot

    An odometry model, represented by a set of nodes (waypoints), is considered the infrastructure of any probabilistic localisation method. Gaussian and nonparametric filters use an odometry model to localise robots, making predictions to actively correct the robot's location and orientation. In this work, we present a recursive Bayesian filter for landmark recognition, which is used to verify the pose of a robotic wheelchair at a given node location. The Bayesian rule in the proposed method does not incorporate a control action to rectify the robot's pose (passive localisation). The filter approximates the robot's pose based on a feature-extraction sensor model. Features are extracted from local environmental regions (landmarks), and each landmark is assigned a distinct posterior probability (signature) at each node location. A node is verified by the robot when the covariance between the posterior and prior probability falls below a threshold. We tested the proposed method in an indoor environment, where accurate localisation results were obtained. The experiments demonstrated the robustness of the filter for passive localisation.
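
    The sketch below gives a minimal reading of the recursive update and the node-verification test: the posterior over candidate landmarks is the normalised product of likelihood and prior (no control action, hence passive), and a node is accepted when the posterior matches the signature stored for it. A mean squared difference stands in for the paper's covariance criterion; all names are illustrative.

        import numpy as np

        def bayes_update(prior, likelihood):
            """One recursive Bayes step over candidate landmarks:
            posterior is proportional to likelihood * prior."""
            posterior = likelihood * prior
            return posterior / posterior.sum()

        def node_verified(posterior, signature, threshold=0.01):
            """Accept the node when the posterior agrees with the
            signature stored for it; a mean squared difference is an
            assumed stand-in for the paper's covariance test."""
            return float(np.mean((posterior - signature) ** 2)) < threshold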

    Towards human-friendly efficient control of multi-robot teams

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that must traverse an unknown terrain in the shortest time. In the experiments, the operator commands a group of rovers through a maze in minimal time. The efficiency of performing such tasks depends both on the robots' levels of autonomy and on the operator's ability to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchies of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA: specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to the followers, and (c) controlling the entire group. Unsurprisingly, commands at a higher G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
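
    As a sketch of the three G-LOA compared in the experiments, the dispatcher below routes a UGVL-style command to (a) one robot, (b) a leader whose controls are cloned to the followers, or (c) the whole group. The Robot interface and command strings are hypothetical; the paper's UGVL grammar is not reproduced here.

        from dataclasses import dataclass, field

        @dataclass
        class Robot:
            name: str
            log: list = field(default_factory=list)

            def execute(self, cmd):
                self.log.append(cmd)  # stand-in for sending cmd to the UGV

        def dispatch(cmd, team, gloa, target=0):
            if gloa == "individual":      # (a) one robot at a time
                team[target].execute(cmd)
            elif gloa == "leader-clone":  # (b) leader's controls cloned to followers
                for robot in team:
                    robot.execute(cmd)
            elif gloa == "group":         # (c) a single command for the whole team
                for robot in team:
                    robot.execute("group:" + cmd)

        team = [Robot("rover%d" % i) for i in range(3)]
        dispatch("forward 1m", team, gloa="leader-clone")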

    Finding Good Attribute Subsets for Improved Decision Trees Using a Genetic Algorithm Wrapper; a Supervised Learning Application in the Food Business Sector for Wine Type Classification

    This study provides a method to assist decision makers in managing large datasets, reducing decision risk and highlighting significant, appropriately weighted subsets of the data. To this end, binary decision tree (BDT) and genetic algorithm (GA) methods are combined using a wrapper technique. The BDT algorithm classifies data in a tree structure, while the GA identifies the best attribute combinations from a set of candidate combinations, referred to as generations. The study addresses the overfitting that may occur when classifying large datasets by reducing the number of attributes used in classification: the GA minimises the number of selected attributes, reducing the risk of overfitting. The algorithm produces many attribute sets, which are classified with the BDT algorithm and assigned a fitness value based on their accuracy. The fittest sets of attributes (chromosomes), as well as the corresponding BDTs, are then selected for further analysis. The training process uses data from a chemical analysis of wines grown in the same region but derived from three different cultivars. The results demonstrate the effectiveness of this approach in identifying the ingredients, and their weights, that determine a wine's origin.
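
    The wrapper can be sketched as follows, assuming the classic UCI wine data (a chemical analysis of wines from three cultivars, shipped with scikit-learn). Chromosomes are boolean attribute masks and fitness is the cross-validated accuracy of a decision tree on the selected attributes; population size, generation count, and mutation rate are illustrative choices, not those of the study.

        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X, y = load_wine(return_X_y=True)
        n_attr = X.shape[1]

        def fitness(mask):
            """Fitness of a chromosome: CV accuracy of a decision tree
            trained only on the selected attribute subset."""
            if not mask.any():
                return 0.0
            tree = DecisionTreeClassifier(random_state=0)
            return cross_val_score(tree, X[:, mask], y, cv=5).mean()

        pop = rng.integers(0, 2, size=(20, n_attr)).astype(bool)  # generation 0
        for _ in range(15):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]        # keep the fittest
            cuts = rng.integers(1, n_attr, size=10)        # one-point crossover
            children = np.array([np.concatenate((parents[i][:c], parents[-i - 1][c:]))
                                 for i, c in enumerate(cuts)])
            children ^= rng.random(children.shape) < 0.05  # bit-flip mutation
            pop = np.vstack((parents, children))

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected attributes:", np.flatnonzero(best))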

    Placenta abruption in a woman with Wilson’s disease: a case report

    Wilson's disease is a rare genetic disorder of copper metabolism that causes primary hepatic cirrhosis and, secondarily, menstrual abnormalities and infertility. With appropriate therapy, patients are asymptomatic and pregnancy may be achieved. We present a case of placental abruption in a pregnant woman with Wilson's disease and review the management dilemmas and treatment options for pregnant women with Wilson's disease.

    Fast Linear Programming through Transprecision Computing on Small and Sparse Data

    A plethora of program analysis and optimization techniques rely on linear programming at their heart. However, such techniques are often considered too slow for production use. While today's best solvers are optimized for complex problems with thousands of dimensions, linear programming, as used in compilers, is typically applied to small and seemingly trivial problems, but to many instances in a single compilation run. As a result, compilers do not benefit from decades of research on optimizing large-scale linear programming. We design a simplex solver targeted at compilers. A novel theory of transprecision computation, applied from individual elements up to full data structures, provides the computational foundation. By carefully combining it with optimized representations for small and sparse matrices and specialized small-coefficient algorithms, we (1) reduce memory traffic, (2) exploit wide vectors, and (3) use low-precision arithmetic units effectively. We evaluate our work by embedding our solver into a state-of-the-art integer set library and implementing one essential operation, coalescing, on top of our transprecision solver. Our evaluation shows more than an order-of-magnitude speedup on the core simplex pivot operation, and a mean speedup of 3.2x (vs. GMP) and 4.6x (vs. IMath) for the optimized coalescing operation. Our results demonstrate that our optimizations exploit the wide SIMD instructions of modern microarchitectures effectively. We expect our work to provide foundations for a future integer set library that uses transprecision arithmetic to accelerate compiler analyses.
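
    The core transprecision idea, attempting an operation in low precision and escalating only when the result cannot be trusted, can be sketched as below. The float32-to-float64 fallback and the residual test are illustrative assumptions; the paper escalates through specialized small-coefficient and arbitrary-precision representations inside a simplex pivot, not a dense linear solve.

        import numpy as np

        def transprecision_solve(A, b, tol=1e-6):
            """Solve A x = b in float32 first; escalate to float64 only
            if the low-precision residual is too large (an assumed,
            simplified stand-in for the paper's escalation scheme)."""
            x32 = np.linalg.solve(A.astype(np.float32), b.astype(np.float32))
            residual = np.linalg.norm(A @ x32.astype(np.float64) - b)
            if residual <= tol * np.linalg.norm(b):
                return x32.astype(np.float64)  # fast path: low precision sufficed
            return np.linalg.solve(A, b)       # slow path: escalate precision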

    6 Access Methods and Query Processing Techniques

    The performance of a database management system (DBMS) is fundamentally dependent on the access methods and query processing techniques available to the system. Traditionally, relational DBMSs have relied on well-known access methods, such as the ubiquitous B+-tree, hashing with chaining, and, in som…
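
    As a toy illustration of one access method named in the excerpt, hashing with chaining, the sketch below keeps a list (chain) per bucket; real DBMS implementations add paging, latching, and overflow handling omitted here.

        class ChainedHashTable:
            """Toy hash index with chaining: colliding keys go into a
            per-bucket list that lookups scan linearly."""

            def __init__(self, n_buckets=8):
                self.buckets = [[] for _ in range(n_buckets)]

            def insert(self, key, value):
                self.buckets[hash(key) % len(self.buckets)].append((key, value))

            def lookup(self, key):
                for k, v in self.buckets[hash(key) % len(self.buckets)]:
                    if k == key:
                        return v
                return None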